Information gain in decision trees : Wikipedia English edition
Information gain in decision trees

In information theory and machine learning, information gain is a synonym for ''Kullback–Leibler divergence''. In the context of decision trees, however, the term is sometimes used synonymously with mutual information, which is the expected value of the Kullback–Leibler divergence of the conditional distribution of one variable given the other from its unconditional distribution.
In particular, the information gain about a random variable ''X'' obtained from observing that a random variable ''A'' takes the value ''A'' = ''a'' is the Kullback–Leibler divergence ''D''KL(''p''(''x'' | ''a'') || ''p''(''x'' | ''I'')) of the posterior distribution ''p''(''x'' | ''a'') for ''x'' given ''a'' from the prior distribution ''p''(''x'' | ''I'') for ''x''.
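Written out for a discrete variable (this is the standard definition of the divergence, not specific to this article), the quantity above is:

```latex
D_{\mathrm{KL}}\bigl(p(x \mid a) \,\|\, p(x \mid I)\bigr)
  = \sum_{x} p(x \mid a)\, \log \frac{p(x \mid a)}{p(x \mid I)}
```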
The expected value of the information gain is the mutual information ''I''(''X''; ''A'') of ''X'' and ''A'', i.e. the reduction in the entropy of ''X'' achieved by learning the state of the random variable ''A''.
In machine learning, this concept can be used to define a preferred sequence of attributes to investigate in order to narrow down the state of ''X'' as rapidly as possible. Such a sequence (which at each stage depends on the outcomes of the previous attribute investigations) is called a decision tree. At each stage, the attribute with the highest mutual information is usually preferred.
==General definition==
In general terms, the expected information gain is the change in information entropy ''H'' from a prior state to a state that takes some information as given:
IG(T,a) = H(T) - H(T|a),
where ''T'' is a set of training examples and H(T|a) is the conditional entropy of ''T'' given the value of attribute ''a''.
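This definition can be sketched in a few lines of Python. The dataset, the attribute names, and the helper functions below are illustrative, not part of the original article: `entropy` computes H(T) over the class labels, and `information_gain` computes H(T) minus the weighted average of subset entropies, i.e. H(T) − H(T|a).

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """IG(T, a) = H(T) - H(T | a) for the attribute at index `attr`."""
    h_prior = entropy(labels)
    n = len(labels)
    # Partition the labels by the attribute's value, then take the
    # weighted average of the subset entropies: this is H(T | a).
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr], []).append(label)
    h_cond = sum(len(s) / n * entropy(s) for s in subsets.values())
    return h_prior - h_cond

# Toy weather-style dataset: (outlook, windy) -> play?
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["yes", "yes", "no", "no"]
print(information_gain(rows, labels, 0))  # outlook splits the labels perfectly -> 1.0
print(information_gain(rows, labels, 1))  # windy is uninformative here -> 0.0
```

A greedy decision-tree learner such as ID3 would split on the attribute with the largest gain (here, outlook) and recurse on each resulting subset.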

Excerpt source: Wikipedia, the free encyclopedia
Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.